Biostat 203B Homework 2
Due Feb 9 @ 11:59PM
Display machine information for reproducibility:
sessionInfo()
Load necessary libraries (you can add more as needed).
library(arrow)
library(data.table)
library(memuse)
library(pryr)
library(R.utils)
library(tidyverse)
Display memory information of your computer:
memuse::Sys.meminfo()
In this exercise, we explore various tools for ingesting the MIMIC-IV data introduced in Homework 1.
Display the contents of MIMIC hosp and icu data folders:
ls -l ~/mimic/hosp/
ls -l ~/mimic/icu/
Q1. read.csv (base R) vs read_csv (tidyverse) vs fread (data.table)
Q1.1 Speed, memory, and data types
There are quite a few utilities in R for reading plain text data files. Let us test the speed of reading a moderate sized compressed csv file, admissions.csv.gz, by three functions: read.csv in base R, read_csv in tidyverse, and fread in the data.table package.
Which function is fastest? Is there a difference in the (default) parsed data types? How much memory does each resultant dataframe or tibble use? (Hint: system.time measures run times; pryr::object_size measures memory usage.)
Answer:
admissions_base <- read.csv("~/mimic/hosp/admissions.csv.gz")
admissions_base
system.time(admissions_base <- read.csv("~/mimic/hosp/admissions.csv.gz"))
pryr::object_size(admissions_base)
admissions_tidy <- read_csv("~/mimic/hosp/admissions.csv.gz")
admissions_tidy
system.time(read_csv("~/mimic/hosp/admissions.csv.gz"))
pryr::object_size(admissions_tidy)
admissions_data_table <- fread("~/mimic/hosp/admissions.csv.gz")
admissions_data_table
system.time(admissions_data_table <- fread("~/mimic/hosp/admissions.csv.gz"))
pryr::object_size(admissions_data_table)
The fastest function is fread from the data.table package. The default parsed data types differ among the three functions: read.csv in base R reads the date-time columns (admittime, dischtime, deathtime, edregtime, edouttime) as plain character strings, read_csv in tidyverse parses them as date-times (dttm) and reads the integer ID columns as double, and fread reads the ID columns as integer; the remaining text columns are read as character by all three. The memory usage of the resultant dataframe or tibble is 158.71 MB for read.csv in base R, 55.31 MB for read_csv in tidyverse, and 50.13 MB for fread in data.table.
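A quick way to see these differences side by side is to tabulate the class of each column in the three objects created above (a minimal sketch, using only what was already read in):
# compare the default parsed type of each column across the three readers
data.frame(
  column   = names(admissions_base),
  read.csv = sapply(admissions_base, function(x) class(x)[1]),
  read_csv = sapply(admissions_tidy, function(x) class(x)[1]),
  fread    = sapply(admissions_data_table, function(x) class(x)[1])
)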
Q1.2 User-supplied data types
Re-ingest admissions.csv.gz by indicating appropriate column data types in read_csv. Does the run time change? How much memory does the result tibble use? (Hint: col_types argument in read_csv.)
Answer:
col_types <- cols(
admission_type = col_character(),
admission_location = col_character(),
discharge_location = col_character(),
insurance = col_character(),
language = col_character(),
marital_status = col_character(),
race = col_character(),
subject_id = col_double(),
hadm_id = col_double(),
hospital_expire_flag = col_double(),
admittime = col_datetime(),
dischtime = col_datetime(),
deathtime = col_datetime(),
edregtime = col_datetime(),
edouttime = col_datetime()
)
admissions_tidy <- read_csv("~/mimic/hosp/admissions.csv.gz", col_types = col_types)
system.time(read_csv("~/mimic/hosp/admissions.csv.gz", col_types = col_types))
pryr::object_size(admissions_tidy)
The run time changes from 0.894 s (user) / 0.104 s (system) / 0.999 s (elapsed) to 0.799 s (user) / 0.085 s (system) / 0.443 s (elapsed), so supplying the column types makes the ingest noticeably faster. The memory usage of the resultant tibble is still 55.31 MB.
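To confirm that the supplied column specification was actually used, the spec attached to the tibble can be inspected (a quick optional check with readr's spec()):
# display the column specification read_csv used for admissions_tidy
spec(admissions_tidy)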
Q2. Ingest and filter big data files
Let us focus on a bigger file, labevents.csv.gz, which is about 125x bigger than admissions.csv.gz.
ls -l ~/mimic/hosp/labevents.csv.gz
Display the first 10 lines of this file.
zcat < ~/mimic/hosp/labevents.csv.gz | head -10
Q2.1 Ingest labevents.csv.gz by read_csv
Try to ingest labevents.csv.gz using read_csv. What happens? If it takes more than 5 minutes on your computer, then abort the program and report your findings.
Answer:
system.time(labevents_tidy <- read_csv("~/mimic/hosp/labevents.csv.gz"))
The read_csv call does not finish within 5 minutes on my computer, so I abort the program and report my findings here. The file is too large for read_csv, which tries to load the entire table into memory; if the call is left running, it eventually fails with Error: vector memory exhausted (limit reached?).
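To inspect the file from R without ingesting all of it, read_csv can be limited to the first few rows (a small sketch using the n_max argument; 1000 is an arbitrary choice):
# peek at the first 1000 rows only; this returns quickly even for a huge file
labevents_peek <- read_csv("~/mimic/hosp/labevents.csv.gz", n_max = 1000)
labevents_peek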
Q2.2 Ingest selected columns of labevents.csv.gz by read_csv
Try to ingest only columns subject_id, itemid, charttime, and valuenum in labevents.csv.gz using read_csv. Does this solve the ingestion issue? (Hint: col_select argument in read_csv.)
Answer:
labevents_tidy <- read_csv("~/mimic/hosp/labevents.csv.gz", col_select = c(subject_id, itemid, charttime, valuenum))
system.time(read_csv("~/mimic/hosp/labevents.csv.gz", col_select = c(subject_id, itemid, charttime, valuenum)))
pryr::object_size(labevents_tidy)
Yes, this solves the ingestion issue: reading only these four columns takes about 1.5 minutes, and the resultant tibble uses 1.3 GB of memory.
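For comparison, fread from data.table can restrict the ingest to the same four columns via its select argument (an optional alternative sketch, not required for the answer above):
# read only the four needed columns with fread; R.utils (loaded above) lets fread handle the .gz file
labevents_dt <- fread("~/mimic/hosp/labevents.csv.gz",
                      select = c("subject_id", "itemid", "charttime", "valuenum"))
pryr::object_size(labevents_dt)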
Q2.3 Ingest subset of labevents.csv.gz
Our first strategy to handle this big data file is to make a subset of the labevents data. Read the MIMIC documentation for the content in data file labevents.csv.
In later exercises, we will only be interested in the following lab items: creatinine (50912), potassium (50971), sodium (50983), chloride (50902), bicarbonate (50882), hematocrit (51221), white blood cell count (51301), and glucose (50931) and the following columns: subject_id, itemid, charttime, valuenum. Write a Bash command to extract these columns and rows from labevents.csv.gz and save the result to a new file labevents_filtered.csv.gz in the current working directory. (Hint: use zcat < to pipe the output of labevents.csv.gz to awk and then to gzip to compress the output. To save render time, put #| eval: false at the beginning of this code chunk.)
Display the first 10 lines of the new file labevents_filtered.csv.gz. How many lines are in this new file? How long does it take read_csv to ingest labevents_filtered.csv.gz?
Answer:
zcat < ~/mimic/hosp/labevents.csv.gz | \
awk -F, 'BEGIN {OFS=","} NR==1 || \
($5==50912 || $5==50971 || $5==50983 || \
$5==50902 || $5==50882 || $5==51221 || \
$5==51301 || $5==50931) {print $2, $5, $7, $10}' | \
gzip > ~/mimic/hosp/labevents_filtered.csv.gz
#Display the first 10 lines of the new file labevents_filtered.csv.gz
cd ~/mimic/hosp
zcat < labevents_filtered.csv.gz 2>/dev/null | head -10
subject_id,itemid,charttime,valuenum
10000032,50882,2180-03-23 11:51:00,27
10000032,50902,2180-03-23 11:51:00,101
10000032,50912,2180-03-23 11:51:00,0.4
10000032,50971,2180-03-23 11:51:00,3.7
10000032,50983,2180-03-23 11:51:00,136
10000032,50931,2180-03-23 11:51:00,95
10000032,51221,2180-03-23 11:51:00,45.4
10000032,51301,2180-03-23 11:51:00,3
10000032,51221,2180-05-06 22:25:00,42.6
#Display the number of lines in the new file labevents_filtered.csv.gz
zcat < labevents_filtered.csv.gz | wc -l
24855910
#How long does it take read_csv to ingest labevents_filtered.csv.gz?
system.time(read_csv("labevents_filtered.csv.gz"))It takes about 3.55 seconds for read_csv to ingest labevents_filtered.csv.gz on my computer.
Q2.4 Ingest labevents.csv by Apache Arrow
Our second strategy is to use Apache Arrow for larger-than-memory data analytics. Unfortunately Arrow does not work with gz files directly. First decompress labevents.csv.gz to labevents.csv and put it in the current working directory. To save render time, put #| eval: false at the beginning of this code chunk.
Then use arrow::open_dataset to ingest labevents.csv, select columns, and filter itemid as in Q2.3. How long does the ingest+select+filter process take? Display the number of rows and the first 10 rows of the result tibble, and make sure they match those in Q2.3. (Hint: use dplyr verbs for selecting columns and filtering rows.)
Answer:
zcat < ~/mimic/hosp/labevents.csv.gz > ~/labevents.csv
system.time({
labevents <- arrow::open_dataset(
"~/labevents.csv",
format = "csv"
) %>%
filter(itemid %in% c(50912, 50971, 50983, 50902, 50882, 51221, 51301, 50931)) %>%
select(subject_id, itemid, charttime, valuenum) %>%
collect()
})
labevents_filtered <- labevents %>%
arrange(subject_id, charttime, itemid)
nrow(labevents_filtered)
head(labevents_filtered, 10)
The ingest+select+filter process takes about 31.192 seconds. The number of rows is 24855909, and the first 10 rows, displayed above, match those in Q2.3.
Write a few sentences to explain what is Apache Arrow. Imagine you want to explain it to a layman in an elevator.
Apache Arrow is a cross-language development platform for in-memory data. It specifies a standardized language-independent columnar memory format for flat and hierarchical data, organized for efficient analytic operations on modern hardware. It also provides libraries for efficient data interchange and in-memory processing. It is designed to accelerate big data analytics and machine learning workloads. To explain to others, I would say that Apache Arrow is a platform that can handle large datasets and execute complex queries efficiently.
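As a toy illustration of this in-memory columnar format (a minimal sketch on a built-in data set, not on the MIMIC data; arrow_table is assumed to be available in the installed arrow version):
# put a small data frame into an Arrow Table and inspect its columnar schema
tb <- arrow::arrow_table(head(mtcars))
tb$schema
# convert back to an ordinary data frame for use in R
as.data.frame(tb)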
Q2.5 Compress labevents.csv to Parquet format and ingest/select/filter
Re-write the csv file labevents.csv in the binary Parquet format (Hint: arrow::write_dataset.) How large is the Parquet file(s)? How long does the ingest+select+filter process of the Parquet file(s) take? Display the number of rows and the first 10 rows of the result tibble and make sure they match those in Q2.3. (Hint: use dplyr verbs for selecting columns and filtering rows.)
# Write the CSV file in the binary Parquet format
labevents <- arrow::open_dataset("~/labevents.csv", format = "csv")
arrow::write_dataset(labevents, "~/labevents.parquet")
#How large is the Parquet file(s)?
# write_dataset writes a directory of Parquet files, so sum the sizes of the files inside it
sum(file.size(list.files("~/labevents.parquet", recursive = TRUE, full.names = TRUE)))
#How long does the ingest+select+filter process of the Parquet file(s) take?
system.time({
labevents_parquet <- arrow::open_dataset(
"~/labevents.parquet") %>%
dplyr::filter(itemid %in% c(50912, 50971, 50983, 50902, 50882, 51221, 51301, 50931)) %>%
dplyr::select(subject_id, itemid, charttime, valuenum) %>%
collect()
})
labevents_filter_parquet <- labevents_parquet %>%
arrange(subject_id, charttime, itemid)
#Display the number of rows and the first 10 rows of the result tibble
num_rows <- nrow(labevents_filter_parquet)
cat("Number of rows:", num_rows, "\n")
head(labevents_filter_parquet, 10)
Write a few sentences to explain what is the Parquet format. Imagine you want to explain it to a layman in an elevator.
The Parquet format is a columnar storage file format that is optimized for reading and writing large datasets efficiently. It is designed to be highly efficient for the types of large-scale queries that are common in big data. To explain to others, I would say that the Parquet format is a file format that can handle large datasets and execute complex queries efficiently.
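A toy illustration of why the columnar layout helps (a minimal sketch on a temporary file, not on the MIMIC data):
# write a small data frame to Parquet, then read back only one column;
# the columnar layout lets the reader skip the columns it does not need
tmp <- tempfile(fileext = ".parquet")
arrow::write_parquet(mtcars, tmp)
arrow::read_parquet(tmp, col_select = c("mpg"))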
Q2.6 DuckDB
Ingest the Parquet file, convert it to a DuckDB table by arrow::to_duckdb, select columns, and filter rows as in Q2.5. How long does the ingest+convert+select+filter process take? Display the number of rows and the first 10 rows of the result tibble and make sure they match those in Q2.3. (Hint: use dplyr verbs for selecting columns and filtering rows.)
library(duckdb)
library(dbplyr)
#Ingest the Parquet file
labevents_parquet <- arrow::open_dataset("~/labevents.parquet")
system.time({
labevents_duckdb <- labevents_parquet %>%
arrow::to_duckdb(table_name = "labevents_table") %>%
dplyr::filter(itemid %in% c(50912, 50971, 50983, 50902, 50882, 51221, 51301, 50931)) %>%
dplyr::select(subject_id, itemid, charttime, valuenum) %>%
collect()
})
# Display the number of rows and the first 10 rows of the result tibble
labevents_duckdb <- labevents_duckdb %>%
  arrange(subject_id, charttime, itemid)
nrow(labevents_duckdb)
print(head(labevents_duckdb, 10))
Write a few sentences to explain what is DuckDB. Imagine you want to explain it to a layman in an elevator.
DuckDB is an in-process analytical database management system designed to execute complex queries on large datasets. It runs inside the host program (here, the R session) without a separate server, and it is highly efficient for the kinds of large-scale analytical queries that are common in big data. I use it here to ingest, convert, select, and filter large data files from R. To explain it to a layman, I would say that DuckDB is a small database engine that lives inside your program and can run complex queries on large datasets quickly.
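Besides the dplyr interface used above, DuckDB can also query the Parquet files directly with SQL (an optional sketch; the path assumes the dataset written in Q2.5):
# open an in-process DuckDB connection and run SQL over the Parquet files
con <- DBI::dbConnect(duckdb::duckdb())
sql <- sprintf(
  "SELECT itemid, COUNT(*) AS n FROM read_parquet('%s') GROUP BY itemid ORDER BY n DESC LIMIT 5",
  file.path(path.expand("~"), "labevents.parquet", "*.parquet")
)
DBI::dbGetQuery(con, sql)
DBI::dbDisconnect(con, shutdown = TRUE)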
Q3. Ingest and filter chartevents.csv.gz
chartevents.csv.gz contains all the charted data available for a patient. During their ICU stay, the primary repository of a patient’s information is their electronic chart. The itemid variable indicates a single measurement type in the database. The value variable is the value measured for itemid. The first 10 lines of chartevents.csv.gz are
zcat < ~/mimic/icu/chartevents.csv.gz | head -10
d_items.csv.gz is the dictionary for the itemid in chartevents.csv.gz.
zcat < ~/mimic/icu/d_items.csv.gz | head -10
In later exercises, we are interested in the vitals for ICU patients: heart rate (220045), mean non-invasive blood pressure (220181), systolic non-invasive blood pressure (220179), body temperature in Fahrenheit (223761), and respiratory rate (220210). Retrieve a subset of chartevents.csv.gz only containing these items, using the favorite method you learnt in Q2.
Document the steps and show code. Display the number of rows and the first 10 rows of the result tibble.
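Before filtering, the five itemid codes can be checked against the d_items dictionary (an optional sketch; itemid and label are columns of d_items.csv.gz as shown above):
# look up the labels of the five vital-sign itemids in the dictionary
read_csv("~/mimic/icu/d_items.csv.gz") %>%
  filter(itemid %in% c(220045, 220181, 220179, 223761, 220210)) %>%
  select(itemid, label)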
Step 1: Decompress chartevents.csv.gz to chartevents.csv
# decompress chartevents.csv.gz to chartevents.csv
zcat < ~/mimic/icu/chartevents.csv.gz > ~/chartevents.csv
Step 2: Ingest and filter chartevents.csv and display the number of rows of the result
# choose Parquet format
chartevents <- arrow::open_dataset("~/chartevents.csv", format = "csv")
arrow::write_dataset(chartevents, "~/chartevents.parquet")
system.time({
chartevents_parquet <- arrow::open_dataset(
"~/chartevents.parquet") %>%
dplyr::filter(itemid %in% c(220045, 220181, 220179, 223761, 220210)) %>%
dplyr::select(subject_id, itemid, charttime, valuenum) %>%
collect()
})
chartevents_filtered_parquet <- chartevents_parquet %>%
arrange(subject_id, charttime, itemid)
#Display the number of rows and the first 10 rows of the result tibble
num_rows <- nrow(chartevents_filtered_parquet)
cat("Number of rows:", num_rows, "\n")
head(chartevents_filtered_parquet, 10)
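As a sanity check (a small optional sketch), the filtered result should contain only the five requested vital-sign itemids:
# count rows per itemid; only the five vital signs should appear
chartevents_filtered_parquet %>%
  count(itemid)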